AIhub coffee corner: Agentic AI

AIHub

This month we tackle the topic of agentic AI. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), and Michael Littman (Brown University). Why is it taking off? Sanmay, perhaps you could kick off with what you noticed at AAMAS [the Autonomous Agents and Multiagent Systems conference]? Sanmay Das: It was very interesting because obviously there's suddenly been an enormous interest in what an agent is and in the development of agentic AI.


Beyond Code: The Multidimensional Impacts of Large Language Models in Software Development

Bonabi, Sardar, Bana, Sarah, Gurbaxani, Vijay, Nian, Tingting

arXiv.org Artificial Intelligence

Large language models (LLMs) are poised to significantly impact software development, especially in the Open-Source Software (OSS) sector. To understand this impact, we first outline the mechanisms through which LLMs may influence OSS through code development, collaborative knowledge transfer, and skill development. We then empirically examine how LLMs affect OSS developers' work in these three key areas. Leveraging a natural experiment from a temporary ChatGPT ban in Italy, we employ a Difference-in-Differences framework with two-way fixed effects to analyze data from all OSS developers on GitHub in three similar countries -- Italy, France, and Portugal -- totaling 88,022 users. We find that access to ChatGPT increases developer productivity by 6.4%, knowledge sharing by 9.6%, and skill acquisition by 8.4%. These benefits vary significantly by user experience level: novice developers primarily experience productivity gains, whereas more experienced developers benefit more from improved knowledge sharing and accelerated skill acquisition. In addition, we find that LLM-assisted learning is highly context-dependent, with the greatest benefits observed in technically complex, fragmented, or rapidly evolving contexts. We show that the productivity effects of LLMs extend beyond direct code generation to include enhanced collaborative learning and knowledge exchange among developers -- dynamics that are essential for gaining a holistic understanding of LLMs' impact in OSS. Our findings offer critical managerial implications: strategically deploying LLMs can accelerate novice developers' onboarding and productivity, empower intermediate developers to foster knowledge sharing and collaboration, and support rapid skill acquisition -- together enhancing long-term organizational productivity and agility.
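The Difference-in-Differences logic behind the study can be sketched numerically. The figures below are purely hypothetical (not taken from the paper): they compare developers in a country where ChatGPT was temporarily banned against developers in an unaffected country, netting out shocks common to both groups.

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 Difference-in-Differences estimate: the treated
    group's pre/post change minus the control group's pre/post change,
    which removes time trends shared by both groups."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean commits per developer-month (illustrative only):
# treated = developers affected by the ban, control = unaffected developers.
effect = diff_in_diff(treat_pre=10.0, treat_post=9.0,
                      ctrl_pre=10.0, ctrl_post=10.5)
print(effect)  # -1.5: losing access is associated with fewer commits
```

The paper's actual specification generalizes this 2x2 comparison: a regression with two-way (user and time period) fixed effects plays the role of the pre/post and treated/control differencing across many developers and months.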


A new programming language for high-performance computers

#artificialintelligence

High-performance computing is needed for an ever-growing number of tasks -- such as image processing or various deep learning applications on neural nets -- where one must plow through immense piles of data reasonably quickly. It's widely believed that, in carrying out operations of this sort, there are unavoidable trade-offs between speed and reliability: if speed is the top priority, reliability will likely suffer, and vice versa. However, a team of researchers, based mainly at MIT, is calling that notion into question, claiming that one can, in fact, have it all. With the new programming language, which they've written specifically for high-performance computing, says Amanda Liu, a second-year PhD student at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), "speed and correctness do not have to compete. Instead, they can go together, hand-in-hand, in the programs we write."


Introducing Bean Machine

#artificialintelligence

The final part of my Life series is still in the works, but I need to interrupt that series with some exciting news. I will likely do a whole series on Bean Machine later this autumn, but for today let me just give you the brief overview, should you not want to go through the paper. As the paper's title says, Bean Machine is a Probabilistic Programming Language (PPL). For a detailed introduction to PPLs you should read my "Fixing Random" series, where I show how we could greatly improve support for analysis of randomness in .NET by both adding types to the base class library and adding language features to a language like C#. If you don't want to read that 40-post introduction, here's the TL;DR.
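For readers who want a feel for what a PPL automates, here is a toy illustration in plain Python (this is not Bean Machine's API): you declare a prior and an observation process, and inference runs "backwards" from observed data to a posterior over the unknown. Rejection sampling is the crudest possible inference engine, but it shows the shape of the problem a PPL solves for you.

```python
import random

def posterior_mean_heads(observed_heads, n_flips, budget=20000, seed=0):
    """Infer a coin's unknown bias from flip data by rejection sampling:
    draw a candidate bias from a uniform prior, simulate the flips, and
    keep the draw only if it reproduces the observed head count."""
    rng = random.Random(seed)
    kept = []
    for _ in range(budget):
        p = rng.random()  # prior: bias ~ Uniform(0, 1)
        heads = sum(rng.random() < p for _ in range(n_flips))
        if heads == observed_heads:
            kept.append(p)
    return sum(kept) / len(kept)

# Observing 9 heads in 10 flips, the posterior mean of the bias should
# land near the analytic Beta-posterior answer of 10/12, i.e. about 0.83.
print(round(posterior_mean_heads(9, 10), 2))
```

A real PPL like Bean Machine replaces this brute-force loop with efficient, reusable inference algorithms, so the model declaration and the inference machinery stay cleanly separated.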


Microsoft: Bosque is a new programming language built for AI in the cloud ZDNet

#artificialintelligence

Microsoft is ready to show off the latest improvements it's made to a new experimental programming language for the cloud called Bosque. Bosque is being developed by a team at Microsoft Research led by principal engineer Mark Marron, who describes it as an "experiment in regularized design for a machine-assisted rapid and reliable software development lifecycle". The project borrows heavily from TypeScript and machine learning for software development in the cloud. The Bosque programming language aims to cater to cloud developers with knowledge of Microsoft's TypeScript JavaScript superset and Node.js, the widely-used runtime for executing JavaScript code outside a browser. In a paper Marron published last year, he outlined how Bosque's regularized programming model could lead to a massive boost in programmer productivity, on par with gains made after structured programming – a term defined by Dutch computer programming pioneer Edsger Wybe Dijkstra – took off in the 1970s and spawned a new generation of compilers and integrated development environment (IDE) tools.


6 Stages of Learning a New Programming Language

#artificialintelligence

When you are learning "core" concepts in a programming language, do you frequently make a list of questions to ask? I usually find that I digress a lot. That is, I tend to follow my train of thought down the line until the very end. So I started with concept A about Python, then ended up googling a whole lot about object-oriented programming in Python, which led me to scope out a potential project to do later. Through this process, I bookmarked syntax conventions, object-oriented programming concepts, and a list of frequently used data structures.


New AI Programming Language Reportedly Makes AI Easier for Everyone

#artificialintelligence

As if we needed it to be easier for Skynet to be created. A team of MIT researchers has created an AI programming language that, they say, makes it easier for novices to start programming artificial intelligence. Not only that, it will help experts further advance the field. The new probabilistic-programming language is called 'Gen', and it is detailed in an MIT paper titled Gen: a general-purpose probabilistic programming system with programmable inference, which the researchers presented at the Programming Language Design and Implementation conference last week.


AI Weekly: Experts say OpenAI's controversial model is a potential threat to society and science

#artificialintelligence

Last week, OpenAI released GPT-2, a conversational AI system that quickly became controversial. Without domain-specific training data, GPT-2 achieves state-of-the-art performance on seven of eight natural language understanding benchmarks, for tasks like reading comprehension and question answering. A paper and some code were released when the unsupervised model, trained on 40GB of internet text, went public, but the full model wasn't released due to its creators' concerns about "malicious applications of the technology," alluding to things such as the automated generation of fake news. As a result, the wider community cannot fully verify or replicate the results. Some, including Keras deep learning library founder François Chollet, called the OpenAI GPT-2 release (or lack thereof) an irresponsible, fear-mongering PR tactic and publicity stunt.


Facebook's chief AI scientist: Deep learning may need a new programming language

#artificialintelligence

Deep learning may need a new programming language that's more flexible and easier to work with than Python, Facebook AI Research director Yann LeCun said today. It's not yet clear if such a language is necessary, but the possibility runs against very entrenched desires from researchers and engineers, he said. LeCun has worked with neural networks since the 1980s. "There are several projects at Google, Facebook, and other places to kind of design such a compiled language that can be efficient for deep learning, but it's not clear at all that the community will follow, because people just want to use Python," LeCun said in a phone call with VentureBeat. "The question now is, is that a valid approach?"

